
    Contextual modulation of primary visual cortex by auditory signals

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195–201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256–1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue ‘Auditory and visual scene analysis’
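
    The read-out described above is, at heart, multivariate pattern analysis: a linear classifier is trained to separate sound categories from V1 voxel activation patterns and evaluated on held-out data. The sketch below is purely illustrative and is not the authors' pipeline; the arrays `v1_patterns` and `sound_labels` are hypothetical placeholders, and scikit-learn is an assumed tooling choice.

```python
# Illustrative sketch (not the authors' pipeline): decoding sound category
# from V1 voxel patterns with a cross-validated linear classifier.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Hypothetical data: 120 trials x 500 V1 voxels, 3 natural-sound categories.
n_trials, n_voxels, n_categories = 120, 500, 3
v1_patterns = rng.standard_normal((n_trials, n_voxels))
sound_labels = np.repeat(np.arange(n_categories), n_trials // n_categories)

# Linear decoder with voxel-wise standardisation, scored by cross-validation.
decoder = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10_000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
accuracy = cross_val_score(decoder, v1_patterns, sound_labels, cv=cv)

print(f"mean decoding accuracy: {accuracy.mean():.2f} (chance = {1 / n_categories:.2f})")
```

    On real data, above-chance cross-validated accuracy would be the evidence that sound category is linearly decodable from early visual cortex.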

    Backwards is the way forward: feedback in the cortical hierarchy predicts the expected future

    Clark offers a powerful description of the brain as a prediction machine, one that makes progress on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, hierarchical prediction offers progress on a concrete descriptive level for testing and constraining the conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models)
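
    As a concrete anchor for that descriptive level, the following minimal sketch implements the three named ingredients (predictions, prediction errors, internal-model update) in a Rao and Ballard style two-level scheme; the dimensions, learning rates and variable names are assumptions made for illustration, not a model proposed in the commentary.

```python
# Minimal sketch of the ingredients named above (predictions, prediction
# errors, internal-model update); illustrative only, not Clark's proposal.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-level hierarchy: a latent representation r predicts the
# input x through a generative weight matrix W (the "internal model").
n_input, n_latent = 16, 4
W = 0.1 * rng.standard_normal((n_input, n_latent))
x = rng.standard_normal(n_input)   # sensory input
r = np.zeros(n_latent)             # higher-level representation

lr_r, lr_W = 0.1, 0.01
for _ in range(200):
    prediction = W @ r             # top-down prediction (feedback)
    error = x - prediction         # bottom-up prediction error
    r += lr_r * (W.T @ error - r)  # adjust representation to reduce the error
    W += lr_W * np.outer(error, r) # slowly adapt the internal model

print("residual prediction error:", np.linalg.norm(x - W @ r))
```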

    Decoding face categories in diagnostic subregions of primary visual cortex

    Higher visual areas in the occipitotemporal cortex contain discrete regions for face processing, but it remains unclear whether V1 is modulated by top-down influences during face discrimination, and whether any such modulation is widespread throughout V1 or localized to retinotopic regions processing task-relevant facial features. Employing functional magnetic resonance imaging (fMRI), we mapped the cortical representation of two feature locations that modulate higher visual areas during categorical judgements: the eyes and the mouth. Subjects were presented with happy and fearful faces, and we measured the fMRI signal of V1 regions processing the eyes and mouth whilst subjects engaged in gender and expression categorization tasks. In a univariate analysis, we used a region-of-interest-based general linear model approach to reveal changes in activation within these regions as a function of task. We then trained a linear pattern classifier to classify facial expression or gender on the basis of V1 data from ‘eye’ and ‘mouth’ regions, and from the remaining non-diagnostic V1 region. Using multivariate techniques, we show that V1 activity discriminates face categories both in local ‘diagnostic’ and widespread ‘non-diagnostic’ cortical subregions. This indicates that V1 might receive the processed outcome of complex facial feature analysis from other cortical (i.e. fusiform face area, occipital face area) or subcortical areas (amygdala)
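
    A minimal sketch of the multivariate step, assuming hypothetical trial-by-voxel arrays for each V1 subregion; it is not the study's exact pipeline (the classifier choice and cross-validation scheme here are illustrative), but it shows how decoding accuracy can be compared across 'eye', 'mouth' and non-diagnostic regions.

```python
# Illustrative sketch: classify expression (happy vs fearful) separately from
# 'eye', 'mouth' and non-diagnostic V1 voxel patterns. Data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials = 80
labels = np.tile([0, 1], n_trials // 2)  # 0 = happy, 1 = fearful (placeholder)

# Hypothetical voxel patterns per retinotopic subregion (trials x voxels).
rois = {
    "eye region": rng.standard_normal((n_trials, 200)),
    "mouth region": rng.standard_normal((n_trials, 200)),
    "non-diagnostic V1": rng.standard_normal((n_trials, 800)),
}

for name, patterns in rois.items():
    clf = LogisticRegression(max_iter=1000)
    acc = cross_val_score(clf, patterns, labels, cv=5).mean()
    print(f"{name}: decoding accuracy {acc:.2f} (chance 0.50)")
```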

    Transmission of facial expressions of emotion co-evolved with their efficient decoding in the brain: behavioral and brain evidence

    Competent social organisms will read the social signals of their peers. In primates, the face has evolved to transmit the organism's internal emotional state. Adaptive action suggests that the brain of the receiver has co-evolved to efficiently decode expression signals. Here, we review and integrate the evidence for this hypothesis. With a computational approach, we co-examined facial expressions as signals for data transmission and the brain as receiver and decoder of these signals. First, we show in a model observer that facial expressions form a weakly correlated signal set. Second, using time-resolved EEG data, we show how the brain uses spatial frequency information impinging on the retina to decorrelate expression categories. Between 140 and 200 ms following stimulus onset, independently in the left and right hemispheres, an information processing mechanism starts locally by encoding the eye, irrespective of expression, then zooms out to process the entire face before zooming back in to diagnostic features (e.g. the opened eyes in "fear", the mouth in "happy"). A model categorizer demonstrates that at 200 ms, the left and right hemispheres have represented enough information to predict behavioral categorization performance
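
    The claim that expressions form a weakly correlated signal set can be made concrete by correlating expression templates pairwise. The sketch below assumes hypothetical vectorised templates (random placeholders here) rather than the authors' stimuli or model observer.

```python
# Illustrative sketch: quantify how "decorrelated" an expression signal set is
# via pairwise correlations between expression templates (placeholders here).
import numpy as np

rng = np.random.default_rng(3)
expressions = ["happy", "fear", "anger", "disgust", "sadness", "surprise"]

# Hypothetical templates: one vectorised feature map per expression,
# e.g. pixel or spatial-frequency representations.
templates = np.stack([rng.standard_normal(1024) for _ in expressions])

corr = np.corrcoef(templates)  # pairwise Pearson correlations
off_diag = corr[~np.eye(len(expressions), dtype=bool)]
print("mean between-expression correlation:", round(off_diag.mean(), 3))
# A small mean off-diagonal value indicates a weakly correlated signal set.
```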

    Inverse mapping the neuronal correlates of facial expression processing

    The brain computations that underlie our ability to recognize facial expressions involve the extraction of relevant information from the faces of our peers, and allow us to readily respond in an appropriate manner to the displayed emotion. Here we present recent advances in understanding the brain processes underlying the categorization of facial expressions of emotion using classification image techniques to link both the brain dynamics (EEG) and the behavioural strategies of three observers with specific facial features
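
    Classification image techniques of the kind mentioned above estimate which stimulus regions drive an observer's responses by reverse correlation. The sketch below is a generic, illustrative version with hypothetical noise fields and responses, not the specific method linking EEG dynamics to behaviour in this work.

```python
# Illustrative sketch of a classification image (reverse correlation):
# per-trial noise fields are averaged according to the observer's response,
# revealing which pixels drive the categorisation. Data are hypothetical.
import numpy as np

rng = np.random.default_rng(5)
n_trials, h, w = 2000, 32, 32
noise = rng.standard_normal((n_trials, h, w))  # per-trial noise fields
responses = rng.integers(0, 2, n_trials)       # 1 = "happy", 0 = "fear" (placeholder)

# Classic mean-difference classification image.
ci = noise[responses == 1].mean(axis=0) - noise[responses == 0].mean(axis=0)

# Z-score to highlight which facial regions are diagnostic for the observer.
ci_z = (ci - ci.mean()) / ci.std()
print("classification image shape:", ci_z.shape)
```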

    An improved method to detect damage using modal strain energy based damage index

    This paper presents two novel concepts to enhance the accuracy of damage detection using the Modal Strain Energy based Damage Index (MSEDI) in the presence of noise in the mode shape data. First, the paper presents a sequential curve-fitting technique that reduces the effect of noise on the calculation of the MSEDI more effectively than the two commonly used curve-fitting techniques, namely polynomial and Fourier series fitting. Second, a probability-based Generalized Damage Localization Index (GDLI) is proposed as a viable improvement to the damage detection process. The study uses a validated ABAQUS finite-element model of a reinforced concrete beam to obtain mode shape data in the undamaged and damaged states. Noise is simulated by adding three levels of random noise (1%, 3%, and 5%) to the mode shape data. Results show that damage detection is enhanced as the number of modes and samples used with the GDLI increases
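
    Damage indices of the MSEDI family are built from mode shape curvatures before and after damage, which is why noise in the mode shape data is so harmful and why the curve-fitting step matters. The sketch below implements a generic Stubbs-type modal strain energy index with an ordinary polynomial fit standing in for the paper's sequential curve-fitting technique; the function names, element count and polynomial order are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of a Stubbs-type modal strain energy damage index,
# not the paper's exact MSEDI/GDLI formulation.
import numpy as np

def curvature(x, mode_shape, poly_order=7):
    # Smooth the (noisy) mode shape with an ordinary polynomial fit and
    # return its second derivative, which drives the modal strain energy.
    coeffs = np.polynomial.polynomial.polyfit(x, mode_shape, poly_order)
    d2 = np.polynomial.polynomial.polyder(coeffs, 2)
    return np.polynomial.polynomial.polyval(x, d2)

def damage_index(x, phi_undamaged, phi_damaged, n_elements=20):
    dx = x[1] - x[0]                         # uniform grid spacing assumed
    k_u = curvature(x, phi_undamaged) ** 2   # strain-energy density, undamaged
    k_d = curvature(x, phi_damaged) ** 2     # strain-energy density, damaged
    total_u, total_d = k_u.sum() * dx, k_d.sum() * dx
    element_ids = np.array_split(np.arange(x.size), n_elements)
    beta = np.array([
        ((k_d[idx].sum() * dx + total_d) * total_u) /
        ((k_u[idx].sum() * dx + total_u) * total_d)
        for idx in element_ids
    ])
    return (beta - beta.mean()) / beta.std()  # normalised; large values flag damage
```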

    Commentary on a combined approach to the problem of developing biomarkers for the prediction of spontaneous preterm labor that leads to preterm birth

    INTRODUCTION: Globally, preterm birth has replaced congenital malformation as the major cause of perinatal mortality and morbidity. The reduced rate of congenital malformation was not achieved through a single biophysical or biochemical marker at a specific gestational age, but rather through a combination of clinical, biophysical and biochemical markers at different gestational ages. Since the aetiology of spontaneous preterm birth is also multifactorial, it is unlikely that a single biomarker test at a specific gestational age will emerge as the definitive predictive test. METHODS: The Biomarkers Group of PREBIC, comprising clinicians, basic scientists and other experts in the field with a particular interest in preterm birth, has produced this commentary with short-, medium- and long-term aims: i) to alert clinicians to the advances that are being made in the prediction of spontaneous preterm birth; ii) to encourage clinicians and scientists to continue their efforts in this field, and not to be disheartened or nihilistic because of a perceived lack of progress; and iii) to enable development of novel interventions that can reduce the mortality and morbidity associated with preterm birth. RESULTS: Using language that we hope is clear to practising clinicians, we have identified 11 sections in which there exists the potential, feasibility and capability of technologies to provide candidate biomarkers for the prediction of spontaneous preterm birth, and how current limitations to this research might be circumvented. DISCUSSION: The combination of biophysical, biochemical, immunological, microbiological, fetal cell, exosomal, or cell-free RNA markers at different gestational ages, integrated as part of a multivariable predictor model, may be necessary to advance our attempts to predict spontaneous preterm labor (sPTL) and preterm birth (PTB). This will require systems biology approaches using "omics" data and artificial intelligence/machine learning to manage the data appropriately. The ultimate goal is to reduce the mortality and morbidity associated with preterm birth
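
    A multivariable predictor of the kind proposed in the Discussion can be prototyped as a regularised classification model over biomarker measurements taken at different gestational ages. The sketch below is illustrative only: the feature list, simulated data and logistic regression choice are assumptions, not a validated clinical predictor.

```python
# Illustrative sketch only: a multivariable model combining hypothetical
# biomarker measurements to predict spontaneous preterm birth.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n_patients = 500

# Hypothetical predictors measured at different gestational ages.
features = np.column_stack([
    rng.normal(30, 8, n_patients),   # cervical length (mm), ~20 weeks
    rng.normal(50, 40, n_patients),  # fetal fibronectin (ng/mL), ~24 weeks
    rng.normal(5, 2, n_patients),    # candidate cell-free RNA score
    rng.normal(1, 0.5, n_patients),  # candidate exosomal marker level
])
preterm = rng.integers(0, 2, n_patients)  # 1 = spontaneous preterm birth (placeholder)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
auc = cross_val_score(model, features, preterm, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.2f}")
```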